A Traffic Sign Recognition System (TSRS) plays a significant role in self-driving cars, driver assistance systems, traffic surveillance, and traffic safety. Traffic sign recognition is necessary to overcome traffic-related difficulties. The system has two parts: localization and recognition. In the localization part, the traffic sign region is located and marked with a rectangular bounding box. In the recognition part, the content of that rectangular region is identified to determine which traffic sign it contains. Finally, the detected road traffic signs are classified using deep learning. In this article, a traffic sign detection and identification method based on image processing is proposed, combined with a convolutional neural network (CNN) to classify traffic signs. Owing to its high recognition rate, a CNN can be used to realize various computer vision tasks. TensorFlow is used to implement the CNN. On the German dataset, we are able to identify circular signs with more than 98.2% accuracy.
I. INTRODUCTION
With rapid technological development, vehicles have become an essential part of our daily lives. Driving without following traffic rules creates increasingly intricate traffic on the road and, as a result, is one of the major causes of accidents every year. In recent times, road accidents have been occurring regularly and in increasing numbers across the world. A leading reason for most road accidents is ignorance or unawareness of traffic signs. A traffic sign is any entity, device, or board on the road that carries the rules, indicates a warning, or provides other information regarding driving. It therefore also provides, through traffic signals and traffic control devices, the information necessary for smooth driving.
According to global road crash data, approximately 1.3 million people die in traffic accidents each year, averaging 3,287 deaths each day. Unfortunately, drunk driving, reckless driving, fatigue, and driver distraction continue to be the leading causes of road deaths. With today's traffic control technologies, there is a good chance that a motorist will miss some of the traffic signs. A survey of fatalities from 1970 to 2019 is shown below.
The chart gives a graphical representation of these indicators. It shows that, because of the steep increase in the vehicle population, which grew at a CAGR of 11.15% during the period 1970 to 2019, the road accident rate normalized for vehicles, although higher than the road accident death rate, shows a downward trend. On the other hand, road accident risk and road death risk (normalized for population) both show an upward trend, with road accident risk higher than death risk.
An on-board computer vision system that can detect and identify traffic signs could help drivers avoid accidents in a variety of ways. The on-board vision technology might augment reality by displaying forthcoming warning signs ahead of time, or even keep them shown on a screen after the sign has been passed. This would make it less likely that the driver would miss an important sign. Traffic sign recognition is a system that allows a vehicle to detect and interpret the traffic signs placed on the road.
III. SOFTWARE REQUIREMENT SPECIFICATIONS
As traffic signs need to be easily perceivable, they are brightly coloured in red, blue, and yellow. Hence, detection is often based on a pre-segmentation of the image to reduce the search space and retrieve Regions of Interest (ROIs). Since direct thresholding of the RGB channels is sensitive to changes in illumination, the relation between the RGB (Red, Green, Blue) colours is often used instead. Colour enhancement is used to extract red, blue, and yellow blobs.
This transform emphasizes the pixels where a given colour channel is dominant over the other two in the RGB colour space. Chromatic and achromatic filters are used to extract the red rims and the white interior of speed-limit and warning traffic signs, respectively. The HSI (Hue Saturation Intensity) model is used because it is invariant to illumination changes. Empirically determined fixed thresholds define the range of each HSI channel in which the red and blue traffic sign candidates lie. It should be noted that HSI is computationally expensive due to its nonlinear formulae.
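As an illustration, a minimal sketch of such colour-based pre-segmentation is given below. It uses OpenCV's HSV space (closely related to HSI) with illustrative threshold ranges; the input file name, area cut-off, and threshold values are assumptions for illustration, not the empirically determined values referred to above.

import cv2
import numpy as np

# Load the input frame and convert to HSV (hue-saturation-value),
# which, like HSI, is less sensitive to illumination changes than raw RGB.
bgr = cv2.imread("frame.png")            # hypothetical input image
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

# Illustrative thresholds: red hue wraps around 0, so two ranges are combined.
red_lo1  = cv2.inRange(hsv, (0,   80, 60), (10,  255, 255))
red_lo2  = cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
red_mask = cv2.bitwise_or(red_lo1, red_lo2)

# Blue traffic-sign candidates fall in a single hue band.
blue_mask = cv2.inRange(hsv, (100, 80, 60), (130, 255, 255))

# Connected blobs of the combined mask give candidate ROIs.
contours, _ = cv2.findContours(red_mask | blue_mask,
                               cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
rois = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]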
IV. METHODOLOGY
A. Training Dataset
The dataset contains more than 50,000 images of different traffic signs, classified into 43 different classes. The dataset is quite varied: some of the classes have many images, while others have few. The traffic images of each category are divided into a training set and a test set, where the training set covers all classes and every category contains a varying number of images. Most of these images were collected from the internet and selected according to the specific requirements.
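A minimal loading sketch is given below, assuming the images are organised in one folder per class (Train/0 through Train/42, as in the public GTSRB layout) and resized to a fixed 30x30 resolution; the directory name, image size, and split ratio are assumptions for illustration.

import os
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split

DATA_DIR, NUM_CLASSES, SIZE = "Train", 43, (30, 30)   # assumed layout and size
images, labels = [], []

for class_id in range(NUM_CLASSES):
    class_dir = os.path.join(DATA_DIR, str(class_id))
    for name in os.listdir(class_dir):
        img = Image.open(os.path.join(class_dir, name)).convert("RGB").resize(SIZE)
        images.append(np.array(img))
        labels.append(class_id)

X = np.array(images, dtype="float32") / 255.0          # normalise to [0, 1]
y = np.array(labels)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    stratify=y, random_state=42)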
B. Data Acquisition
Input traffic data will be acquired by the system, and the captured images will be processed frame by frame. The testing process will make use of several sets of traffic images. For the detection phase, a set of local traffic images from Google Images, containing actual roads with different traffic signs to be detected, will be used to test the models. Another set of data is composed of road traffic images from the dataset, containing actual roads with traffic signs captured at different distances. The last set of data consists of actual local images used to test the full performance of the traffic sign detection and recognition system.
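A minimal frame-acquisition sketch is shown below, assuming OpenCV and a video source (a camera index or a video file path; index 0 is used here purely as an example). Each captured frame would be handed to the pre-processing and detection stages.

import cv2

cap = cv2.VideoCapture(0)        # 0 = default camera; a video file path also works
while cap.isOpened():
    ok, frame = cap.read()       # one BGR frame per iteration
    if not ok:
        break
    # frame would be passed to pre-processing / detection here
    cv2.imshow("input", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):   # press 'q' to stop
        break
cap.release()
cv2.destroyAllWindows()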
C. Pre-processing
Pre-processing is one of the most important steps in image classification. The detection phase is composed of pre-processing, colour-based segmentation, shape-based detection, and object localization. This study will evaluate four different pre-processing and colour-based segmentation methods that address the problem of lighting variations affecting the detection phase.
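As one possible illustration of such a pre-processing step, the sketch below equalises the luminance channel to reduce the effect of lighting variations, then resizes and normalises the frame. The specific choices here (YCrCb equalisation, 30x30 target size) are assumptions for illustration, not the four methods evaluated in the study.

import cv2
import numpy as np

def preprocess(bgr, size=(30, 30)):
    # Equalise the luminance channel only, so colours are preserved
    # while global lighting differences are reduced.
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    bgr_eq = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

    # Resize to the network input resolution and scale to [0, 1].
    resized = cv2.resize(bgr_eq, size)
    return resized.astype("float32") / 255.0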
D. Convolutional Neural Network (CNN)
CNNs are a class of deep neural networks that can recognize and classify particular features from images and are widely used for analysing visual data. Their applications range from image and video recognition, image classification, and medical image analysis to computer vision and natural language processing. The term "convolution" in CNN denotes the mathematical operation of convolution, a special kind of linear operation wherein two functions are combined to produce a third function that expresses how the shape of one function is modified by the other. In simple terms, two images that can be represented as matrices are multiplied to give an output that is used to extract features from the image.
In feature extraction, the traffic sign is classified and segmented into multiple pixels; this results in the extraction of multiple features from the image. The training process is then carried out on the input image pixels, which can be differentiated from each other. Clustering is a process that segregates different objects in such a way that similar objects are placed into similar groups. The classification of traffic signs is filtered based on the size and shape of the image. Filtration is a process to detect the edges of the image; a blurred or shaded image is detected with the kernel sliding over the extracted pixels of the image. A convolutional neural network learns this field from basic shapes, evolving many features during the training process in such a way that it can distinguish one sign from another. Max-pooling is a technique that decreases the density of the feature maps and helps classify the respective sign. In the detection of the traffic sign image, a Gaussian technique is introduced to reduce the noise.
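A minimal TensorFlow/Keras sketch of such a network is shown below, with convolution, max-pooling, and a 43-way softmax output matching the dataset described above. The layer sizes and dropout rate are illustrative assumptions, not the exact architecture used.

import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(30, 30, 3)),                 # 30x30 RGB input, as above
    layers.Conv2D(32, (3, 3), activation="relu"),    # learn low-level edge/shape filters
    layers.MaxPooling2D((2, 2)),                     # max-pooling reduces spatial density
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                             # regularisation
    layers.Dense(43, activation="softmax"),          # one score per traffic-sign class
])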
E. Data Classifier
To classify the candidate traffic sign images, learning algorithms will be used. For this study, the learning algorithms are implemented in Python. The classifiers are to be evaluated based on accuracy and processing speed for traffic sign recognition. Fig. 6 shows the classification phase.
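A minimal sketch of training the classifier and measuring the two evaluation criteria mentioned above (accuracy and processing speed) is given below, reusing the model and data splits from the earlier sketches; the epoch count and batch size are assumptions for illustration.

import time

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=15, batch_size=64,
          validation_split=0.1)

# Accuracy on the held-out test set.
loss, acc = model.evaluate(X_test, y_test, verbose=0)

# Rough processing-speed estimate: average prediction time per image.
start = time.time()
model.predict(X_test, verbose=0)
per_image_ms = 1000.0 * (time.time() - start) / len(X_test)
print(f"test accuracy: {acc:.3f}, ~{per_image_ms:.2f} ms per image")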
VIII. FUTURE WORK
Our algorithm runs continuously when detecting signs, which leads to detections even when there are no signs in the area and hence to a continuous flow of output. This results in false or unnecessary detections. It could be improved by increasing the threshold value for detecting a sign, as sketched below. The overall performance can also be improved and customized with the help of more datasets from different countries.
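One simple way to realise the threshold idea mentioned above is to reject low-confidence predictions. The sketch below assumes the Keras model from the earlier sections and an ROI already pre-processed to the network's input size; the threshold value of 0.9 is purely illustrative and would need to be tuned.

import numpy as np

CONF_THRESHOLD = 0.9   # illustrative value; tuning it trades missed signs vs. false detections

def classify_roi(model, roi):
    """Return (class_id, confidence), or None when no sign is detected confidently."""
    probs = model.predict(roi[np.newaxis, ...], verbose=0)[0]
    class_id = int(np.argmax(probs))
    confidence = float(probs[class_id])
    return (class_id, confidence) if confidence >= CONF_THRESHOLD else None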
IX. CONCLUSION
A CNN based on the transfer learning method is put forward. A deep CNN is trained with a large dataset, and an effective region-based convolutional neural network (R-CNN) detector is then obtained from a small number of standard traffic training examples. A neural network can learn independently and generate results that aren't limited to the input. Since it stores the input in its own network, rather than in a database, data loss has no effect on its operation. Instead of using a pre-defined neural network model, we created our own. That model was more efficient because it used the torch library instead of any other library. Using the torch technique to detect live streams, the recognition process is faster and easier. We used the neural network for traffic sign recognition due to its state-of-the-art accuracy, thereby combining computer vision and deep learning to develop a real-time traffic sign recognition system. In this model, a voice alert signals the driver when a sign is detected. With this system, the driver never misses a traffic sign because an alert is received before the sign is crossed. Thus, the driver is more likely to drive safely on the road. Furthermore, it allows the driver to stay within the traffic laws.